120 research outputs found
Challenges of the inconsistency regime: Novel debiasing methods for missing data models
We study semi-parametric estimation of the population mean when data are
observed missing at random (MAR) in the "inconsistency regime", in
which neither the outcome model nor the propensity/missingness model can be
estimated consistently. Consider a high-dimensional linear-GLM specification in
which the number of confounders is proportional to the sample size. In the
regime where ordinary least squares remains feasible, past work has developed theory for the classical AIPW estimator in
this model and established its variance inflation and asymptotic normality when
the outcome model is fit by ordinary least squares. Ordinary least squares is
no longer feasible in the case studied here, and we also demonstrate
that a number of classical debiasing procedures become inconsistent. This
challenge motivates our development and analysis of a novel procedure: we
establish that it is consistent for the population mean under proportional
asymptotics, and we also provide confidence intervals for the
linear model coefficients. Providing such guarantees in the inconsistency
regime requires a new debiasing approach that combines penalized M-estimates of
both the outcome and propensity/missingness models in a non-standard way. Comment: 89 pages, 6 figures
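For orientation, the classical AIPW estimator of a population mean under MAR that the abstract refers to can be sketched as follows. This is a minimal illustration that takes fitted outcome predictions and missingness probabilities as inputs; how to fit them in the inconsistency regime is precisely the paper's subject.

```python
import numpy as np

def aipw_mean(y, observed, mu_hat, pi_hat):
    """Classical AIPW estimate of E[Y] under missing-at-random data.

    y        : outcomes (entries with observed == 0 are ignored)
    observed : 1 if y_i is observed, 0 if missing
    mu_hat   : fitted outcome-model predictions for every unit
    pi_hat   : fitted missingness probabilities P(observed = 1 | confounders)
    """
    y_filled = np.where(observed == 1, y, 0.0)
    # Outcome-model prediction plus an inverse-probability-weighted
    # correction using the residuals on the observed units.
    return np.mean(mu_hat + observed * (y_filled - mu_hat) / pi_hat)
```

When either `mu_hat` or `pi_hat` is consistently estimated, this estimator is consistent (double robustness); the abstract's "inconsistency regime" is the setting where neither is.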
Robust tracking design for uncertain MIMO systems using proportional-integral controller of order ν
This paper provides a systematic method to design robust tracking controllers of reference signals with bounded derivatives of order ν for uncertain multi-input multi-output (MIMO) systems with bounded parametric uncertainties, in particular of rational multi-affine type, and/or in the presence of disturbances with bounded derivatives of order ν. The proposed controllers have state-feedback structures combined with proportional-integral regulators of order ν (PIν). Theoretical tools and systematic methodologies are provided to effectively design robust controllers for the considered systems, also in the case of additional bounded nonlinearities and/or states that are not directly measurable. Applicability and efficiency of the proposed methods are validated through three examples: the first is theoretical and useful to validate the proposed methodology, the second presents a metal-cutting problem for an industrial robot, and the third deals with a composite robot, such as a milling machine.
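As a minimal illustration of the integral-action idea underlying PI regulators (the ν = 1 case only, on a toy first-order plant; this is not the paper's PIν design), a discrete-time PI tracking loop drives the steady-state error for a constant reference to zero:

```python
def simulate_pi_tracking(kp=5.0, ki=10.0, ref=1.0, dt=0.01, steps=2000):
    # Plant: x' = -x + u.  Controller: u = kp*e + ki*integral(e), e = ref - x.
    # The integral state removes the steady-state tracking error.
    x, z = 0.0, 0.0
    for _ in range(steps):
        e = ref - x
        z += e * dt                 # integral of the tracking error
        u = kp * e + ki * z         # PI control law
        x += (-x + u) * dt          # forward-Euler plant update
    return x
```

With the gains above, the closed-loop characteristic polynomial is s^2 + 6s + 10, so the loop is stable and the output settles on the reference.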
The Lasso with general Gaussian designs with applications to hypothesis testing
The Lasso is a method for high-dimensional regression that is now commonly
used when the number of covariates is of the same order as, or larger than, the
number of observations. Classical asymptotic normality theory is not
applicable for this model for two fundamental reasons: (1) the regularized
risk is non-smooth; (2) the distance between the estimator and the true parameter vector cannot be
neglected. As a consequence, the standard perturbative arguments that are the
traditional basis for asymptotic normality fail.
On the other hand, the Lasso estimator can be precisely characterized in the
regime in which both the number of observations and the number of covariates
are large, while their ratio is of order one. This
characterization was first obtained in the case of standard Gaussian designs,
and subsequently generalized to other high-dimensional estimation procedures.
Here we extend the same characterization to correlated Gaussian designs with
non-singular covariance structure. This characterization is expressed in terms
of a simpler "fixed design" model. We establish non-asymptotic bounds on the
distance between distributions of various quantities in the two models, which
hold uniformly over signals in a suitable sparsity class
and over values of the regularization parameter.
As applications, we study the distribution of the debiased Lasso and show
that a degrees-of-freedom correction is necessary for computing valid
confidence intervals.
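The degrees-of-freedom correction mentioned above can be sketched as follows. This is a simplified one-step correction for well-conditioned Gaussian designs; `ista_lasso` is an illustrative proximal-gradient solver, not the paper's machinery.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    # Proximal gradient (ISTA) for min_b  (1/2n)||y - Xb||^2 + lam*||b||_1.
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant
    b = np.zeros(p)
    for _ in range(n_iter):
        g = X.T @ (X @ b - y) / n            # smooth-part gradient
        t = b - step * g
        b = np.sign(t) * np.maximum(np.abs(t) - step * lam, 0.0)  # soft-threshold
    return b

def debiased_lasso(X, y, b_hat):
    # One-step correction with a degrees-of-freedom adjustment: dividing the
    # residual term by n - s (s = number of active coordinates) rather than n.
    n = X.shape[0]
    s = np.count_nonzero(b_hat)
    return b_hat + X.T @ (y - X @ b_hat) / (n - s)
```

Without the adjustment (dividing by n instead of n - s), the resulting confidence intervals are too narrow in the proportional regime.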
Local convexity of the TAP free energy and AMP convergence for Z2-synchronization
We study mean-field variational Bayesian inference using the TAP approach,
for Z2-synchronization as a prototypical example of a high-dimensional Bayesian
model. We show that for any signal strength above the weak-recovery
threshold, there exists a unique local minimizer of the TAP free energy
functional near the mean of the Bayes posterior law. Furthermore, the TAP free
energy in a local neighborhood of this minimizer is strongly convex.
Consequently, a natural-gradient/mirror-descent algorithm achieves linear
convergence to this minimizer from a local initialization, which may be
obtained by a finite number of iterates of Approximate Message Passing (AMP).
This provides a rigorous foundation for variational inference in high
dimensions via minimization of the TAP free energy.
We also analyze the finite-sample convergence of AMP, showing that AMP is
asymptotically stable at the TAP minimizer for any signal strength above this
threshold, and is linearly convergent to this minimizer from a spectral
initialization for sufficiently large signal strength. Such a guarantee is stronger than results
obtainable by state evolution analyses, which only describe a fixed number of
AMP iterations in the infinite-sample limit.
Our proofs combine the Kac-Rice formula and Sudakov-Fernique Gaussian
comparison inequality to analyze the complexity of critical points that satisfy
strong convexity and stability conditions within their local neighborhoods. Comment: 56 pages
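The AMP iteration for Z2-synchronization referred to above can be sketched in its standard form, with a tanh denoiser and an Onsager correction term (the exact scalars in the paper's version may differ):

```python
import numpy as np

def amp_z2(Y, x0, n_iter=25):
    # AMP for the spiked model Y = (lam/n) x x^T + W:
    #   x_{t+1} = Y tanh(x_t) - b_t tanh(x_{t-1}),
    # where b_t = mean(1 - tanh(x_t)^2) is the Onsager correction term.
    x_prev = np.zeros_like(x0, dtype=float)
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        b = np.mean(1.0 - np.tanh(x) ** 2)
        x, x_prev = Y @ np.tanh(x) - b * np.tanh(x_prev), x
    return np.tanh(x)   # entrywise estimates of the +-1 signs, in [-1, 1]
```

The spectral initialization mentioned in the abstract corresponds to starting from (a rescaling of) the top eigenvector of Y.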
Majorant-Based Control Methodology for Mechatronic and Transportation Processes
This paper provides a unified approach via majorant systems, which allows one to easily design a family of robust, smooth and effective control laws of proportional, h-order-integral, k-order-derivative (PIhDk) type for broad classes of uncertain nonlinear multi-input multi-output (MIMO) systems, including mechatronic and transportation processes with ideal or real actuators, subject to bounded disturbances and measurement errors. The proposed control laws are simple to design and implement and are used, acting on a single design parameter, to track a sufficiently smooth but generic reference signal, yielding a tracking error norm less than a prescribed value, with a good transient phase and feasible control signals, despite the presence of disturbances, parametric and structural uncertainties, and measurement errors, and in the case of real actuators and amplifiers. Moreover, some guidelines to easily design the proposed controllers are given. Finally, the stated unified methodology and various performance comparisons are illustrated and validated in two case studies.
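For orientation, the simplest member of the PIhDk family (h = k = 1, i.e., a plain PID loop; the paper's majorant-based tuning is not reproduced here) applied to a double-integrator plant, a crude model of a unit mass, looks like:

```python
def simulate_pid(kp=40.0, ki=20.0, kd=12.0, ref=1.0, dt=0.001, steps=10000):
    # Plant: unit mass, x'' = u.
    # PID law: u = kp*e + ki*integral(e) + kd*e', with e = ref - x.
    x, v, z = 0.0, 0.0, 0.0      # position, velocity, integral of error
    for _ in range(steps):
        e = ref - x
        z += e * dt
        u = kp * e + ki * z - kd * v   # for a constant reference, e' = -v
        v += u * dt                    # semi-implicit Euler update
        x += v * dt
    return x
```

The closed-loop characteristic polynomial s^3 + 12 s^2 + 40 s + 20 satisfies the Routh criterion, so the position settles on the reference.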
Mean-field variational inference with the TAP free energy: Geometric and statistical properties in linear models
We study mean-field variational inference in a Bayesian linear model when the
sample size n is comparable to the dimension p. In high dimensions, the common
approach of minimizing a Kullback-Leibler divergence from the posterior
distribution, or maximizing an evidence lower bound, may deviate from the true
posterior mean and underestimate posterior uncertainty. We study instead
minimization of the TAP free energy, showing in a high-dimensional asymptotic
framework that it has a local minimizer which provides a consistent estimate of
the posterior marginals and may be used for correctly calibrated posterior
inference. Geometrically, we show that the landscape of the TAP free energy is
strongly convex in an extensive neighborhood of this local minimizer, which
under certain general conditions can be found by an Approximate Message Passing
(AMP) algorithm. We then exhibit an efficient algorithm that linearly converges
to the minimizer within this local neighborhood. In settings where it is
conjectured that no efficient algorithm can find this local neighborhood, we
prove analogous geometric properties for a local minimizer of the TAP free
energy reachable by AMP, and show that posterior inference based on this
minimizer remains correctly calibrated. Comment: 79 pages, 5 figures
Maximum Mean Discrepancy Meets Neural Networks: The Radon-Kolmogorov-Smirnov Test
Maximum mean discrepancy (MMD) refers to a general class of nonparametric
two-sample tests that are based on maximizing the mean difference over samples
from one distribution versus another, over all choices of data
transformations living in some function space. Inspired by
recent work that connects functions of Radon bounded variation (RBV) and neural
networks (Parhi and Nowak, 2021, 2023), we study the MMD defined by taking the
function space to be the unit ball in the RBV space of a given smoothness
order. This test, which we refer to as the Radon-Kolmogorov-Smirnov (RKS) test,
can be viewed as a
generalization of the well-known and classical Kolmogorov-Smirnov (KS) test to
multiple dimensions and higher orders of smoothness. It is also intimately
connected to neural networks: we prove that the witness in the RKS test -- the
function achieving the maximum mean difference -- is always a ridge spline,
i.e., a single neuron in a neural network. This allows us to
leverage the power of modern deep learning toolkits to (approximately) optimize
the criterion that underlies the RKS test. We prove that the RKS test has
asymptotically full power at distinguishing any distinct pair of
distributions, derive its asymptotic null distribution, and carry out extensive
experiments to elucidate the strengths and weaknesses of the RKS test versus
the more traditional kernel MMD test.
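For comparison with the kernel test mentioned in the last sentence, the standard unbiased estimate of squared MMD with a Gaussian kernel (the classical kernel MMD statistic, not the RKS statistic) is:

```python
import numpy as np

def mmd2_unbiased(x, y, bandwidth=1.0):
    # Unbiased estimate of squared MMD between samples x and y (rows = points)
    # with a Gaussian kernel: diagonal terms are excluded from the
    # within-sample averages, so the estimate can be slightly negative.
    def k(a, b):
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-d2 / (2 * bandwidth**2))
    m, n = len(x), len(y)
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    return ((kxx.sum() - np.trace(kxx)) / (m * (m - 1))
            + (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
            - 2 * kxy.mean())
```

The RKS test replaces this fixed-kernel function class with the unit ball of an RBV space, whose witness is a single neuron optimized by gradient methods.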